36 research outputs found

    Learning to Communicate with Deep Multi-Agent Reinforcement Learning

    We consider the problem of multiple agents sensing and acting in environments with the goal of maximising their shared utility. In these environments, agents must learn communication protocols in order to share information that is needed to solve the tasks. By embracing deep neural networks, we are able to demonstrate end-to-end learning of protocols in complex environments inspired by communication riddles and multi-agent computer vision problems with partial observability. We propose two approaches for learning in these domains: Reinforced Inter-Agent Learning (RIAL) and Differentiable Inter-Agent Learning (DIAL). The former uses deep Q-learning, while the latter exploits the fact that, during learning, agents can backpropagate error derivatives through (noisy) communication channels. Hence, this approach uses centralised learning but decentralised execution. Our experiments introduce new environments for studying the learning of communication protocols and present a set of engineering innovations that are essential for success in these domains.
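
    As a rough illustration of the DIAL idea of backpropagating through a (noisy) communication channel, the sketch below passes a real-valued message from a toy sender agent to a receiver and lets the receiver's loss propagate error derivatives back into the sender, so the protocol itself is learned end to end. The network sizes, the stand-in targets, and the use of PyTorch are assumptions made for illustration; this is a minimal sketch, not the paper's architecture or training setup.

```python
# A minimal, hypothetical DIAL-style sketch: the sender's message passes through a
# noisy but differentiable channel, so the receiver's loss also trains the sender.
# (Toy sizes and stand-in targets; not the paper's DRU or Q-learning setup.)
import torch
import torch.nn as nn

class Sender(nn.Module):
    """Maps the sender's observation to a real-valued message."""
    def __init__(self, obs_dim=4, msg_dim=1):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(obs_dim, 16), nn.ReLU(), nn.Linear(16, msg_dim))

    def forward(self, obs):
        return self.net(obs)

class Receiver(nn.Module):
    """Maps the receiver's observation plus the incoming message to Q-values."""
    def __init__(self, obs_dim=4, msg_dim=1, n_actions=2):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(obs_dim + msg_dim, 16), nn.ReLU(),
                                 nn.Linear(16, n_actions))

    def forward(self, obs, msg):
        return self.net(torch.cat([obs, msg], dim=-1))

sender, receiver = Sender(), Receiver()
opt = torch.optim.Adam(list(sender.parameters()) + list(receiver.parameters()), lr=1e-3)

obs_a = torch.randn(32, 4)      # sender observations (batch of 32)
obs_b = torch.randn(32, 4)      # receiver observations
targets = torch.randn(32, 2)    # stand-in TD targets, purely for illustration

msg = sender(obs_a)
msg_noisy = torch.sigmoid(msg + 0.1 * torch.randn_like(msg))  # noisy channel, still differentiable
q_values = receiver(obs_b, msg_noisy)

loss = nn.functional.mse_loss(q_values, targets)
opt.zero_grad()
loss.backward()                 # error derivatives reach the sender through the channel
opt.step()

# At execution time the message would be discretised (e.g. msg > 0), which is what
# yields centralised learning with decentralised execution.
```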

    Learning with Opponent-Learning Awareness

    Multi-agent settings are quickly gathering importance in machine learning. This includes a plethora of recent work on deep multi-agent reinforcement learning, but also extends to hierarchical RL, generative adversarial networks, and decentralised optimisation. In all these settings the presence of multiple learning agents renders the training problem non-stationary and often leads to unstable training or undesired final results. We present Learning with Opponent-Learning Awareness (LOLA), a method in which each agent shapes the anticipated learning of the other agents in the environment. The LOLA learning rule includes a term that accounts for the impact of one agent's policy on the anticipated parameter update of the other agents. Results show that the encounter of two LOLA agents leads to the emergence of tit-for-tat and therefore cooperation in the iterated prisoners' dilemma (IPD), while independent learning does not. In this domain, LOLA also receives higher payouts compared to a naive learner, and is robust against exploitation by higher-order gradient-based methods. Applied to repeated matching pennies, LOLA agents converge to the Nash equilibrium. In a round-robin tournament we show that LOLA agents successfully shape the learning of a range of multi-agent learning algorithms from the literature, resulting in the highest average returns on the IPD. We also show that the LOLA update rule can be efficiently calculated using an extension of the policy gradient estimator, making the method suitable for model-free RL. The method thus scales to large parameter and input spaces and nonlinear function approximators. We apply LOLA to a grid-world task with an embedded social dilemma using recurrent policies and opponent modelling. By explicitly considering the learning of the other agent, LOLA agents learn to cooperate out of self-interest. The code is at github.com/alshedivat/lola.
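
    The core LOLA step can be sketched as differentiating one's own value through the opponent's anticipated naive gradient update. The toy script below does this for a two-parameter differentiable game; the payoff functions V1 and V2, the step sizes, and the PyTorch implementation are assumptions made for illustration, not the paper's iterated-prisoners'-dilemma setup.

```python
# A toy, hypothetical LOLA update for a two-parameter differentiable game.
# Each agent ascends its own value evaluated at the opponent's anticipated
# (naive-gradient) parameters, which contributes the opponent-shaping term.
# (The payoffs V1/V2 and step sizes are invented for illustration.)
import torch

theta1 = torch.tensor(0.5, requires_grad=True)
theta2 = torch.tensor(-0.3, requires_grad=True)
alpha, eta = 0.1, 0.3   # own learning rate and assumed opponent learning rate

def V1(t1, t2):  # agent 1's value (toy payoff)
    return t1 * t2 - 0.5 * t1 ** 2

def V2(t1, t2):  # agent 2's value (toy payoff)
    return -t1 * t2 - 0.5 * t2 ** 2

for _ in range(100):
    # Opponent's anticipated naive update, kept in the graph (create_graph=True)
    # so that differentiating through it produces the shaping term.
    grad2 = torch.autograd.grad(V2(theta1, theta2), theta2, create_graph=True)[0]
    lola_grad1 = torch.autograd.grad(V1(theta1, theta2 + eta * grad2), theta1)[0]

    # Symmetric computation for agent 2.
    grad1 = torch.autograd.grad(V1(theta1, theta2), theta1, create_graph=True)[0]
    lola_grad2 = torch.autograd.grad(V2(theta1 + eta * grad1, theta2), theta2)[0]

    with torch.no_grad():       # simultaneous gradient-ascent updates
        theta1 += alpha * lola_grad1
        theta2 += alpha * lola_grad2

print(float(theta1), float(theta2))
```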

    Control of Vocal and Respiratory Patterns in Birdsong: Dissection of Forebrain and Brainstem Mechanisms Using Temperature

    Learned motor behaviors require descending forebrain control to be coordinated with midbrain and brainstem motor systems. In songbirds, such as the zebra finch, regular breathing is controlled by brainstem centers, but when the adult songbird begins to sing, its breathing becomes tightly coordinated with forebrain-controlled vocalizations. The periods of silence (gaps) between song syllables are typically filled with brief breaths, allowing the bird to sing uninterrupted for many seconds. While substantial progress has been made in identifying the brain areas and pathways involved in vocal and respiratory control, it is not understood how respiratory and vocal control is coordinated by forebrain motor circuits. Here we combine a recently developed technique for localized brain cooling, together with recordings of thoracic air sac pressure, to examine the role of cortical premotor nucleus HVC (proper name) in respiratory-vocal coordination. We found that HVC cooling, in addition to slowing all song timescales as previously reported, also increased the duration of expiratory pulses (EPs) and inspiratory pulses (IPs). Expiratory pulses, like song syllables, were stretched uniformly by HVC cooling, but most inspiratory pulses exhibited non-uniform stretch of the pressure waveform such that the majority of stretch occurred late in the IP. Indeed, some IPs appeared to change duration by the earlier or later truncation of an underlying inspiratory event. These findings are consistent with the idea that during singing the temporal structure of EPs is under the direct control of forebrain circuits, whereas that of IPs can be strongly influenced by circuits downstream of HVC, likely in the brainstem. An analysis of the temporal jitter of respiratory and vocal structure suggests that IPs may be initiated by HVC at the end of each syllable and terminated by HVC immediately before the onset of the next syllable. (Funding: United States National Institutes of Health, Grants R01 DC009183 and R01 MH067105.)

    K-level Reasoning for Zero-Shot Coordination in Hanabi

    The standard problem setting in cooperative multi-agent learning is self-play (SP), where the goal is to train a team of agents that works well together. However, optimal SP policies commonly contain arbitrary conventions ("handshakes") and are not compatible with other, independently trained agents or humans. This latter desideratum was recently formalized by Hu et al. 2020 as the zero-shot coordination (ZSC) setting and partially addressed with their Other-Play (OP) algorithm, which showed improved ZSC and human-AI performance in the card game Hanabi. OP assumes access to the symmetries of the environment and prevents agents from breaking these in a mutually incompatible way during training. However, as the authors point out, discovering symmetries for a given environment is a computationally hard problem. Instead, we show that through a simple adaptation of k-level reasoning (KLR) (Costa-Gomes et al. 2006), synchronously training all levels, we can obtain competitive ZSC and ad-hoc teamplay performance in Hanabi, including when paired with a human-like proxy bot. We also introduce a new method, synchronous k-level reasoning with a best response (SyKLRBR), which further improves on our synchronous KLR by co-training a best response. (Comment: NeurIPS 2021; 15 pages, 2 figures.)
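
    A minimal sketch of the synchronously trained KLR hierarchy is given below: level 0 is a fixed uniform-random policy, and each higher level takes gradient steps toward a best response to the frozen level beneath it, with every level updated in the same loop. The toy matching game, the number of levels, and the optimizer are assumptions made for illustration; the paper's setting is Hanabi with deep RL agents.

```python
# A toy, hypothetical sketch of synchronously trained k-level reasoning (KLR):
# level 0 is a fixed uniform-random policy, and each level k > 0 is trained as a
# best response to the frozen level k-1, with all levels updated in the same loop.
# (One-shot 3-action matching game with invented payoffs; not the Hanabi setup.)
import torch

n_actions, n_levels, steps = 3, 4, 500
payoff = torch.diag(torch.tensor([1.0, 2.0, 3.0]))     # reward only when actions match

level0 = torch.full((n_actions,), 1.0 / n_actions)     # fixed uniform level-0 policy
logits = [torch.zeros(n_actions, requires_grad=True) for _ in range(n_levels)]
opts = [torch.optim.Adam([l], lr=0.1) for l in logits]

for _ in range(steps):
    # Snapshot every level's current (frozen) policy, then update all levels at once.
    partners = [level0] + [torch.softmax(l, dim=0).detach() for l in logits[:-1]]
    for l, opt, partner in zip(logits, opts, partners):
        pi = torch.softmax(l, dim=0)
        expected_return = pi @ payoff @ partner        # E[r] for pi against the level below
        loss = -expected_return                        # gradient ascent on the return
        opt.zero_grad()
        loss.backward()
        opt.step()

for k, l in enumerate(logits, start=1):
    probs = [round(p, 3) for p in torch.softmax(l, dim=0).tolist()]
    print(f"level {k} policy: {probs}")
```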

    Learning to Optimize Quasi-Newton Methods

    Fast gradient-based optimization algorithms have become increasingly essential for the computationally efficient training of machine learning models. One technique is to multiply the gradient by a preconditioner matrix to produce a step, but it is unclear what the best preconditioner matrix is. This paper introduces a novel machine learning optimizer called LODO, which tries to meta-learn the best preconditioner online during optimization. Specifically, our optimizer merges Learning to Optimize (L2O) techniques with quasi-Newton methods to learn preconditioners parameterized as neural networks; these are more flexible than the preconditioners of other quasi-Newton methods. Unlike other L2O methods, LODO does not require any meta-training on a training-task distribution, and instead learns to optimize on the fly while optimizing on the test task, adapting to the local characteristics of the loss landscape while traversing it. Theoretically, we show that our optimizer approximates the inverse Hessian in noisy loss landscapes and is capable of representing a wide range of inverse Hessians. We experimentally verify that our algorithm can optimize in noisy settings, and show that simpler alternatives for representing the inverse Hessians worsen performance. Lastly, we use our optimizer to train a semi-realistic deep neural network with 95k parameters at speeds comparable to those of standard neural network optimizers.
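
    The on-the-fly meta-learning idea can be sketched as follows: a small learned map turns the current gradient into a preconditioned step, and its parameters are updated from the hypergradient of the loss measured after taking that step. The quadratic test task, the single linear layer standing in for the preconditioner network, and the one-step meta-objective below are all assumptions made for illustration, not the paper's architecture.

```python
# A toy, hypothetical sketch of online meta-learning a preconditioner, in the
# spirit of LODO: a learned map turns the gradient into a step, and its parameters
# are updated from the loss measured after taking that step (a one-step hypergradient).
# (Quadratic test task and a single linear layer as the "preconditioner network"
# are invented stand-ins; this is not the paper's architecture.)
import torch
import torch.nn as nn

torch.manual_seed(0)
dim = 10
R = torch.randn(dim, dim)
A = R @ R.T / dim + torch.eye(dim)      # fixed, well-conditioned curvature
b = torch.randn(dim)

def loss_fn(x):
    return 0.5 * x @ A @ x - b @ x      # quadratic "test task"

x = torch.randn(dim, requires_grad=True)
precond = nn.Linear(dim, dim, bias=False)               # learned preconditioner g -> P g
meta_opt = torch.optim.Adam(precond.parameters(), lr=1e-2)
lr = 0.05

for _ in range(300):
    g = torch.autograd.grad(loss_fn(x), x)[0]           # plain gradient at the current point
    x_next = x - lr * precond(g)                        # preconditioned step (differentiable in P)

    # Meta-objective: the loss after the step; its gradient w.r.t. the preconditioner
    # parameters is the online "learning to optimize" signal.
    meta_loss = loss_fn(x_next)
    meta_opt.zero_grad()
    meta_loss.backward()
    meta_opt.step()

    x = x_next.detach().requires_grad_(True)            # commit the step and continue

print("final loss:", float(loss_fn(x)))
```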